The AI Regulation in the HR department

Artificial Intelligence (AI) is increasingly gaining ground in the HR department, where it is used for purposes such as recruitment and personnel administration. In March 2024, the European Parliament adopted the AI Regulation, which aims to establish a legal framework for the development and use of AI systems and to ensure that such systems do not violate fundamental rights such as the right to equality and non-discrimination. The AI Regulation is still awaiting formal approval by the Council; however, it is expected to be adopted with only minor changes.

According to Article 3(1) of the AI Regulation, an AI system is defined as a ‘machine-based system’ designed to operate with a varying degree of autonomy, capable of adaptation, and able to derive output from the input it receives, such as predictions, recommendations or decisions that may affect physical or virtual environments. For a system to fall under the AI Regulation, it must therefore have a degree of autonomy. The AI Regulation consequently does not apply to systems that can only perform automated actions based on rules or inputs received from individuals. For example, a system that generates a document, such as an employment contract, based on a template and input from the HR department regarding the employee’s name, address, etc., would not be considered an AI system covered by the Regulation.
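By way of illustration, the following minimal Python sketch shows the kind of purely rule-based document generation described above: it merges input from the HR department into a fixed template, has no autonomy and does not adapt, and would therefore, on the reading above, fall outside the AI Regulation’s definition of an AI system. The template and all names are hypothetical; the sketch is not taken from the Regulation.

```python
# Illustrative sketch only: a purely rule-based document generator.
# It fills a fixed template with input supplied by HR, has no autonomy
# and does not adapt or infer anything, so - on the reading above -
# it would not meet the AI Regulation's definition of an AI system.
# The template and names are hypothetical.

CONTRACT_TEMPLATE = (
    "EMPLOYMENT CONTRACT\n"
    "Employee: {name}\n"
    "Address: {address}\n"
    "Position: {position}\n"
    "Start date: {start_date}\n"
)

def generate_contract(name: str, address: str, position: str, start_date: str) -> str:
    """Deterministically merges HR input into the template; no inference, no learning."""
    return CONTRACT_TEMPLATE.format(
        name=name, address=address, position=position, start_date=start_date
    )

if __name__ == "__main__":
    print(generate_contract("Jane Doe", "1 Example Street", "HR Consultant", "2024-09-01"))
```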

The AI Regulation contains rules for different types of AI systems, which vary according to the level of risk associated with each type of AI system and its use.  

Prohibited uses of AI systems (blacklist) 

The AI Regulation prohibits the use of AI systems for certain purposes. The prohibition covers, inter alia, the evaluation or classification of individuals based on their social behavior or personality traits where this leads to harmful or unfavorable treatment that is unjustified or disproportionate.

The prohibition probably does not cover the use of AI systems for personality tests, as such tests are more likely to be considered an assessment of an individual’s personality at a specific point in time rather than a classification of personality traits over time.

High-risk AI systems  

AI systems classified as high-risk must meet strict requirements to ensure that the system is trustworthy, transparent, and accountable. The requirements include, among other things, carrying out risk assessments, ensuring that only high-quality data is used, documenting the technical and ethical choices made in connection with the system, keeping records of the system’s performance, informing users of the nature and purpose of the system, enabling human oversight and intervention, and ensuring accuracy, robustness, and cybersecurity. AI systems must also be tested before they are placed on the market to ensure conformity with the EU rules. In addition, the systems must be registered in an EU database that is accessible to the public. Many of the obligations arising from the AI Regulation rest with the provider of the AI system.

However, users of the AI systems will also have obligations, including ensuring that the data input into the system is relevant, that individuals overseeing the system are competent, and that individuals whose information is processed by the system are informed about the use of AI.  

AI systems categorized as high-risk include those intended to be used for recruitment or the selection of applicants, including screening and filtering job applications and evaluating candidates.

Similarly, AI systems intended to make decisions concerning employment, including promotions and terminations, as well as AI systems used to monitor and evaluate performance and behavior in employment relationships, are classified as high-risk.

Employment law considerations 

The same employment law rules apply to the use of AI systems as to manual personnel administration. Therefore, employers must ensure that the AI system complies with non-discrimination legislation.  

One example is the advertisement of vacancies: on various platforms it is possible to filter by age so that a job posting is targeted specifically at certain age groups. Such targeting of specific age groups will be in breach of the Danish Discrimination Act.

When using AI systems for recruitment, special attention must also be paid to indirect discrimination. AI systems typically learn from past behavior and practices and may therefore perpetuate historical patterns of discrimination, for example in CV screening.
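The following simplified Python sketch is purely illustrative: the data set is synthetic and the "model" is a deliberately simple stand-in for a statistical screening model. It shows how a system that learns shortlisting rates from historical decisions will simply reproduce the disparity present in that history.

```python
# Illustrative sketch only: how a screening rule learned from historical
# decisions can reproduce past bias. The data set is synthetic and the
# per-group majority rule is a stand-in for what a statistical model
# would pick up from the same history.

from collections import defaultdict

# Hypothetical historical screening outcomes: (group, was_shortlisted)
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

# "Train": learn the historical shortlisting rate per group
counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
for group, shortlisted in history:
    counts[group][0] += int(shortlisted)
    counts[group][1] += 1

def screen(group: str) -> bool:
    """Shortlist if the historical rate for the group exceeds 50%,
    i.e. the rule simply perpetuates the pattern it was trained on."""
    shortlisted, total = counts[group]
    return shortlisted / total > 0.5

print(screen("A"))  # True  - group A applicants are always shortlisted
print(screen("B"))  # False - group B applicants are always rejected, mirroring past practice
```

In practice the effect is usually subtler, because the protected characteristic is encoded indirectly through proxies such as education, postcode or employment gaps, which is precisely why indirect discrimination requires special attention.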

General Data Protection Regulation 

The rules of the General Data Protection Regulation do not change with the adoption of the AI Regulation, and the same conditions apply to the processing of personal data using AI systems as to any other processing. Employers must pay particular attention to Article 22 of the General Data Protection Regulation, under which decisions based solely on automated processing, including in recruitment (such as CV screening where applicants with certain characteristics are automatically excluded), are generally prohibited.

Additionally, when using AI systems, the employer will almost always be required to conduct an impact assessment under Article 35 of the General Data Protection Regulation. Such an assessment is required when new technology is used and the processing is likely to pose a high risk to individuals’ rights and freedoms.

Entry into force 

Once the AI Regulation enters into force, it applies immediately and directly in all Member States. The rules will apply in full 24 months after the Regulation’s entry into force, with certain exceptions: the rules on prohibited uses of AI systems will apply six months after entry into force, the rules on codes of practice will apply nine months after entry into force, the rules on general-purpose AI, including governance, will apply 12 months after entry into force, while the obligations for certain high-risk systems will apply 36 months after entry into force.

We recommend 

If an employer wants to invest in a new HR system using AI technology, it is important to ensure that the system complies with the AI Regulation, including ensuring that the system does not rest on fundamental assumptions that prevent it from being compliant with the AI Regulation. It is also recommended to ensure that the supplier commits to making any changes that may become necessary as a result of the AI Regulation.

Another factor worth paying attention to is the data set on which the AI system is trained. The AI system must be trained on representative material, as algorithms can develop a discriminatory bias if the system learns from a data set that is too narrow. Employers must therefore also ensure that they do not purchase AI systems that obtain input from jurisdictions with rules on gender, race, age, etc. that differ from the Danish rules, as there is a risk that such systems will develop discriminatory algorithms.

We regularly provide updates on the AI Regulation.  

This article was first published by our Danish member firm Mette Klingsten Law Firm. For more insights on this topic, please contact our representative Mette Klingsten.